A Fixed Size Storage O(n³) Time Complexity Learning Algorithm for Fully Recurrent Continually Running Networks

Author

  • Jürgen Schmidhuber
Abstract

There are two basic methods for performing steepest descent in fully recurrent networks with n noninput units and m = O(n) input units. Backpropagation through time (BPTT) [e.g., Williams and Peng (1990)] requires potentially unlimited storage in proportion to the length of the longest training sequence but needs only O(n²) computations per time step. BPTT is the method of choice if training sequences are known to have fewer than O(n) time steps. For training sequences involving many more time steps than n, for training sequences of unknown length, and for on-line learning in general, one would like to have an algorithm with upper bounds for storage and for computations required per time step. Such an algorithm is the RTRL algorithm (Robinson and Fallside 1987; Williams and Zipser 1989). It requires only fixed-size storage of the order O(n³) but is computationally expensive: it requires O(n⁴) operations per time step. The algorithm described herein requires O(n³) storage, too. Every O(n) time steps it requires O(n⁴) operations, but on all other time steps it requires only O(n²) operations. This cuts the average time complexity per time step to O(n³).
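
To make the amortization argument concrete, here is a toy cost model in Python (a sketch of our own; the function name and exact block schedule are illustrative assumptions, not the paper's update rules). It charges one O(n⁴) sensitivity update at every block boundary of n steps and O(n²) work on all other steps, and the average lands near n³ operations per step:

    # Toy cost model for the block-amortized scheme described above:
    # one expensive O(n^4) update per block of n steps, cheap O(n^2)
    # work otherwise. Illustrative only, not the paper's equations.
    def average_cost_per_step(n: int, num_steps: int) -> float:
        """Average operation count per time step under a block
        schedule with block length n."""
        total = 0
        for t in range(1, num_steps + 1):
            if t % n == 0:           # block boundary: full update
                total += n ** 4
            else:                    # within a block: cheap step
                total += n ** 2
        return total / num_steps

    if __name__ == "__main__":
        for n in (10, 20, 40):
            avg = average_cost_per_step(n, num_steps=10 * n)
            # One n^4 update amortized over n steps contributes
            # roughly n^3 per step.
            print(f"n={n:3d}  avg ops/step = {avg:.0f}  (n^3 = {n ** 3})")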


Related Articles

A Learning Algorithm for Continually Running Fully Recurrent Neural Networks

The exact form of a gradient-following learning algorithm for completely recurrent networks running in continually sampled time is derived and used as the basis for practical algorithms for temporal supervised learning tasks. These algorithms have: (1) the advantage that they do not require a precisely defined training interval, operating while the network runs; and (2) the disadvantage that the...
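
To make the cost structure of this algorithm (real-time recurrent learning, RTRL) tangible, here is a minimal NumPy sketch under our own simplifying assumptions: tanh units, one square weight matrix, and no separate input units; all names are illustrative. The (n, n, n) sensitivity array is the O(n³) storage, and the double loop containing an O(n²) matrix-vector product inside is the O(n⁴) per-step cost:

    import numpy as np

    def rtrl_step(W, y, p, target):
        """One forward step plus the RTRL sensitivity update.
        p[i, j, k] approximates d y_k / d w_ij (assumed layout)."""
        n = W.shape[0]
        s = W @ y                    # net input to each unit
        y_new = np.tanh(s)           # new activations
        fprime = 1.0 - y_new ** 2    # tanh'(s)
        p_new = np.empty_like(p)
        for i in range(n):
            for j in range(n):
                rec = W @ p[i, j]    # propagate old sensitivities
                rec[i] += y[j]       # direct effect of w_ij on unit i
                p_new[i, j] = fprime * rec
        e = y_new - target           # error for E = 0.5 * ||e||^2
        grad = np.einsum('ijk,k->ij', p_new, e)
        return y_new, p_new, grad

    n = 5
    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.1, size=(n, n))
    y, p = np.zeros(n), np.zeros((n, n, n))   # O(n^3) storage
    y, p, grad = rtrl_step(W, y, p, np.ones(n))
    W -= 0.1 * grad                  # one gradient-descent step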


Using an Evaluator Fixed Structure Learning Automata in Sampling of Social Networks

Social networks are streaming, diverse, and include a wide range of edges; they continuously evolve over time and are formed by the activities among users (such as tweets, emails, etc.), where each activity among users adds an edge to the network graph. Despite their popularity, the dynamicity and large size of most social networks make it difficult or impossible to study the entire network...


TRTRL: A Localized Resource-Efficient Learning Algorithm for Recurrent Neural Networks

This paper introduces an efficient, low-complexity online learning algorithm for recurrent neural networks. The approach is based on the real-time recurrent learning (RTRL) algorithm, whereby the sensitivity set of each neuron is reduced to weights associated either with its input or output links. As a consequence, storage requirements are reduced from O(N³) to O(N²) and the computational complexity...
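
One way to picture the reduction, under our reading of this excerpt (the enumeration below is our illustration, not the paper's data structure): keep only the sensitivities of weights lying on each neuron's own input or output links. Counting the retained triples shows the drop from O(N³) to O(N²) entries:

    # Enumerate the sensitivity entries (k, i, j) that a localized
    # scheme in the spirit of TRTRL would retain: weight w_ij is
    # kept for unit k only if it is one of k's input (i == k) or
    # output (j == k) links. Layout and names are our illustration.
    def truncated_sensitivity_index(n):
        return [(k, i, j)
                for k in range(n)
                for i in range(n)
                for j in range(n)
                if i == k or j == k]

    n = 8
    kept = truncated_sensitivity_index(n)
    print(len(kept), "of", n ** 3)   # 2*n^2 - n entries: O(n^2)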


Reinforcement Learning in Markovian and Non-Markovian Environments

This work addresses three problems with reinforcement learning and adaptive neuro-control: 1. Non-Markovian interfaces between learner and environment. 2. On-line learning based on system realization. 3. Vector-valued adaptive critics. An algorithm is described which is based on system realization and on two interacting fully recurrent continually running networks which may learn in parallel. ...


Locally Connected Recurrent Networks

The fully connected recurrent network (FRN) using the on-line training method, Real Time Recurrent Learning (RTRL), is computationally expensive. It has a computational complexity of O(N⁴) and storage complexity of O(N³), where N is the number of non-input units. We have devised a locally connected recurrent model which has a much lower complexity in both computational time and storage space....
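
The excerpt does not say which connectivity pattern the authors use, so as a purely hypothetical example one can take a banded mask on a ring of units: each unit keeps O(k) neighbours instead of O(N) links, which shrinks the weight count from O(N²) to O(kN) and, with it, the RTRL storage and compute:

    import numpy as np

    # Hypothetical local-connectivity mask (our assumption: a ring
    # topology with k-nearest-neighbour links), shown only to make
    # the complexity argument concrete.
    def local_mask(N: int, k: int) -> np.ndarray:
        idx = np.arange(N)
        dist = np.abs(idx[:, None] - idx[None, :])
        dist = np.minimum(dist, N - dist)    # wrap-around distance
        return dist <= k

    mask = local_mask(N=12, k=2)
    print(mask.sum(), "of", 12 * 12, "connections kept")  # O(kN)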



Journal:
  • Neural Computation

Volume 4, Issue:

Pages: -

Publication year: 1992